Kernel Fisher discriminant analysis : Wikipedia (English edition)
Kernel Fisher discriminant analysis
In statistics, kernel Fisher discriminant analysis (KFD), also known as generalized discriminant analysis and kernel discriminant analysis, is a kernelized version of linear discriminant analysis. It is named after Ronald Fisher. Using the kernel trick, LDA is implicitly performed in a new feature space, which allows non-linear mappings to be learned.
==Linear discriminant analysis==
Intuitively, the idea of LDA is to find a projection where class separation is maximized. Given two sets of labeled data, \mathbf{C}_1 and \mathbf{C}_2, define the class means \mathbf{m}_1 and \mathbf{m}_2 to be
:
\mathbf{m}_i = \frac{1}{l_i}\sum_{n=1}^{l_i}\mathbf{x}_n^i,

where l_i is the number of examples of class \mathbf{C}_i. The goal of linear discriminant analysis is to give a large separation of the class means while also keeping the in-class variance small. This is formulated as maximizing
:
J(\mathbf{w}) = \frac{\mathbf{w}^{\text{T}}\mathbf{S}_B\mathbf{w}}{\mathbf{w}^{\text{T}}\mathbf{S}_W\mathbf{w}},

where \mathbf{S}_B is the between-class covariance matrix and \mathbf{S}_W is the total within-class covariance matrix:
:
\begin{align}
\mathbf{S}_B & = (\mathbf{m}_2-\mathbf{m}_1)(\mathbf{m}_2-\mathbf{m}_1)^{\text{T}} \\
\mathbf{S}_W & = \sum_{i=1,2}\sum_{n=1}^{l_i}(\mathbf{x}_n^i-\mathbf{m}_i)(\mathbf{x}_n^i-\mathbf{m}_i)^{\text{T}}
\end{align}
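The two scatter matrices can be computed directly from data. A minimal numpy sketch on hypothetical two-class toy data (the arrays and values below are illustrative, not from the article):

```python
import numpy as np

# Hypothetical two-class toy data (one example per row); purely illustrative.
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])  # class C_1
X2 = np.array([[6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])  # class C_2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)  # class means m_1, m_2

# between-class scatter S_B: a rank-1 outer product of the mean difference
d = (m2 - m1)[:, None]
S_B = d @ d.T

# within-class scatter S_W: outer products of centered examples,
# summed over both classes
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
```

Both matrices are symmetric; S_B has rank one in the two-class case, which is what lets the optimal direction be written in closed form below.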

Differentiating J(\mathbf{w}) with respect to \mathbf{w}, setting equal to zero, and rearranging gives
:
(\mathbf{w}^{\text{T}}\mathbf{S}_B\mathbf{w})\mathbf{S}_W\mathbf{w} = (\mathbf{w}^{\text{T}}\mathbf{S}_W\mathbf{w})\mathbf{S}_B\mathbf{w}.

Since we only care about the direction of \mathbf{w}, and \mathbf{S}_B\mathbf{w} has the same direction as (\mathbf{m}_2-\mathbf{m}_1), \mathbf{S}_B\mathbf{w} can be replaced by (\mathbf{m}_2-\mathbf{m}_1), and we can drop the scalars (\mathbf{w}^{\text{T}}\mathbf{S}_B\mathbf{w}) and (\mathbf{w}^{\text{T}}\mathbf{S}_W\mathbf{w}) to give
:
\mathbf{w} \propto \mathbf{S}_W^{-1}(\mathbf{m}_2-\mathbf{m}_1).
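The closed-form direction can be verified numerically. A minimal numpy sketch on hypothetical toy data (arrays and values are illustrative only); solving the linear system avoids explicitly inverting \mathbf{S}_W:

```python
import numpy as np

# Hypothetical two-class toy data; names and values are illustrative only.
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])  # class C_1
X2 = np.array([[6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])  # class C_2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# within-class scatter S_W, summed over both classes
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# w proportional to S_W^{-1}(m_2 - m_1); solve rather than invert
w = np.linalg.solve(S_W, m2 - m1)

# projecting onto w maps each class to a cluster on the real line
p1, p2 = X1 @ w, X2 @ w
```

On this toy data the two sets of projections do not overlap, i.e. every projected point of C_1 lies below every projected point of C_2.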


Excerpt source: the free encyclopedia Wikipedia
Read the full article "Kernel Fisher discriminant analysis" on Wikipedia



Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.